Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Free, publicly-accessible full text available September 18, 2026
- Free, publicly-accessible full text available September 18, 2026
- Free, publicly-accessible full text available August 22, 2026
- Free, publicly-accessible full text available September 18, 2026
- Free, publicly-accessible full text available September 18, 2026
- Free, publicly-accessible full text available September 18, 2026
- Free, publicly-accessible full text available September 18, 2026
- Applications such as unbalanced and fully shuffled regression can be approached by optimizing regularized optimal transport (OT) distances, including the entropic OT and Sinkhorn distances. A common approach to this optimization is a first-order optimizer, which requires the gradient of the OT distance. For faster convergence, one might also resort to a second-order optimizer, which additionally requires the Hessian. Computing these derivatives is crucial for efficient and accurate optimization, but it presents significant challenges in memory consumption and numerical stability, especially for large datasets and small regularization strengths. We circumvent these issues by analytically computing the gradients for OT distances and the Hessian for the entropic OT distance, which had not previously been used because of the intricate tensorwise calculations involved and the complex dependence on parameters within the bi-level loss function. Through analytical derivation and spectral analysis, we identify and resolve the numerical instability caused by the singularity and ill-posedness of a key linear system. Consequently, we achieve scalable and stable computation of the Hessian, enabling the implementation of stochastic gradient descent (SGD)-Newton methods. Tests on shuffled regression examples demonstrate that the second stage of the SGD-Newton method converges orders of magnitude faster than the gradient descent-only method while achieving significantly more accurate parameter estimates. (A minimal sketch of the analytic OT gradient appears after this list.) Free, publicly-accessible full text available June 30, 2026.
- Free, publicly-accessible full text available July 13, 2026
- Free, publicly-accessible full text available June 9, 2026
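The abstract above describes the analytic gradient of regularized OT losses and the instability of the plain Sinkhorn recursion at small regularization. As a rough, self-contained illustration (not the paper's implementation), the NumPy sketch below pairs a log-domain Sinkhorn solver, one standard remedy for small-eps underflow, with the envelope-theorem gradient d OT_eps / d C_ij = P*_ij for a toy shuffled linear regression with cost C_ij = (y_i - x_j . theta)^2. The function names, the cost choice, and the plain gradient-descent loop are illustrative assumptions; the paper's Newton stage additionally requires the analytic Hessian, which is not sketched here.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(C, eps, a, b, n_iters=300, tol=1e-6):
    """Log-domain Sinkhorn iterations for the entropic OT plan.

    Iterating on the dual potentials (f, g) instead of the scalings
    exp(f/eps), exp(g/eps) avoids the exp-underflow that makes the
    plain Sinkhorn recursion unstable when eps is small.
    """
    loga, logb = np.log(a), np.log(b)
    f, g = np.zeros_like(a), np.zeros_like(b)
    for _ in range(n_iters):
        f_prev = f
        f = -eps * logsumexp((g[None, :] - C) / eps + logb[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + loga[:, None], axis=0)
        if np.max(np.abs(f - f_prev)) < tol:
            break
    # Optimal plan: P*_ij = a_i * b_j * exp((f_i + g_j - C_ij) / eps)
    return np.exp((f[:, None] + g[None, :] - C) / eps
                  + loga[:, None] + logb[None, :])

def entropic_ot_grad(theta, X, y, eps):
    """Analytic gradient of the entropic OT loss w.r.t. theta.

    Toy shuffled linear regression: cost C_ij = (y_i - x_j . theta)^2.
    By the envelope theorem, d OT_eps / d C_ij = P*_ij, so the chain
    rule gives d OT_eps / d theta = sum_ij P*_ij * dC_ij/dtheta.
    """
    n, m = len(y), X.shape[0]
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    resid = y[:, None] - (X @ theta)[None, :]      # (n, m) residuals
    P = sinkhorn_log(resid ** 2, eps, a, b)
    # dC_ij/dtheta = -2 * resid_ij * x_j, weighted by the plan entries
    return -2.0 * X.T @ (P * resid).sum(axis=0)

# Toy usage: recover theta from shuffled (noiseless) labels by plain
# gradient descent -- the first stage of the two-stage scheme; the
# Newton stage would additionally use the analytic Hessian.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = rng.permutation(X @ theta_true)                # labels arrive shuffled
theta = np.zeros(3)
for _ in range(200):
    # eps trades off matching sharpness against conditioning; very small
    # eps is the hard regime the paper's stabilized computation targets
    theta -= 0.1 * entropic_ot_grad(theta, X, y, eps=0.5)
```

Log-domain iterations are used here because, at small eps, the plain recursion forms exp(-C/eps) directly and underflows to zero, which is one concrete face of the instability the abstract discusses.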